
    The Moser-Tardos Framework with Partial Resampling

    The resampling algorithm of Moser & Tardos is a powerful approach to developing constructive versions of the Lovász Local Lemma (LLL). We generalize this to partial resampling: when a bad event holds, we resample an appropriately random subset of the variables that define this event, rather than the entire set as in Moser & Tardos. This is particularly useful when the bad events are determined by sums of random variables, and it leads to several improved algorithmic applications in scheduling, graph transversals, packet routing, and other areas. For instance, we asymptotically settle a conjecture of Szabó & Tardos (2006) on graph transversals, and obtain improved approximation ratios for a packet routing problem of Leighton, Maggs & Rao (1994).
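    To make the contrast with classic Moser-Tardos concrete, here is a minimal sketch of the resampling loop with a partial-resampling step, for bad events of the "sum of variables exceeds a threshold" form the abstract highlights. The uniform subset choice is a placeholder assumption; the paper prescribes a specific, event-tailored subset distribution.

```python
import random

def partial_resampling_sketch(n, bad_events, max_iters=10_000):
    """Hypothetical sketch, not the paper's exact procedure: bad_events
    is a list of (S, t) pairs, where event (S, t) holds ("is bad") when
    sum(x[i] for i in S) > t -- the sums-of-variables setting the paper
    targets."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        bad = next(((S, t) for S, t in bad_events
                    if sum(x[i] for i in S) > t), None)
        if bad is None:
            return x                      # no bad event holds: done
        S, _ = bad
        # Partial resampling: redraw only a random subset of S (classic
        # Moser-Tardos would redraw all of S; the paper uses a specific,
        # event-tailored distribution for this subset).
        subset = [i for i in S if random.random() < 0.5] or [random.choice(S)]
        for i in subset:
            x[i] = random.randint(0, 1)
    return None                           # no convergence within max_iters
```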

    Improved bounds and algorithms for graph cuts and network reliability

    Karger (SIAM Journal on Computing, 1999) developed the first fully-polynomial approximation scheme to estimate the probability that a graph $G$ becomes disconnected, given that its edges are removed independently with probability $p$. This algorithm runs in $n^{5+o(1)} \epsilon^{-3}$ time to obtain an estimate within relative error $\epsilon$. We improve this run-time through algorithmic and graph-theoretic advances. First, there is a certain key sub-problem encountered by Karger, for which a generic estimation procedure is employed; we show that this sub-problem has a special structure for which a much more efficient algorithm can be used. Second, we show better bounds on the number of edge cuts which are likely to fail. Here, Karger's analysis uses a variety of bounds for various graph parameters; we show that these bounds cannot be simultaneously tight. We describe a new graph parameter which simultaneously influences all the bounds used by Karger, and obtain much tighter estimates of the cut structure of $G$. These techniques allow us to improve the runtime to $n^{3+o(1)} \epsilon^{-2}$; our results also rigorously prove certain experimental observations of Karger & Tai (Proc. ACM-SIAM Symposium on Discrete Algorithms, 1997). Our rigorous proofs are motivated by certain non-rigorous differential-equation approximations which, however, provably track the worst-case trajectories of the relevant parameters. A key driver of Karger's approach (and other cut-related results) is a bound on the number of small cuts: we improve these estimates when the min-cut size is "small" and odd, augmenting, in part, a result of Bixby (Bulletin of the AMS, 1974).
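    For orientation, the quantity being estimated can be written down directly as a brute-force Monte Carlo estimator. The naive baseline sketched below (not Karger's algorithm) needs on the order of 1/Pr[disconnect] samples when failures are rare, which is exactly why FPRAS-type algorithms such as the one above are the interesting object.

```python
import random

def mc_unreliability(n, edges, p, samples=100_000):
    """Brute-force Monte Carlo estimate of the probability that the graph
    on vertices 0..n-1 disconnects when each edge fails independently
    with probability p.  This is the naive baseline, not Karger's FPRAS."""
    def connected(kept):
        adj = {v: [] for v in range(n)}
        for u, v in kept:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]             # DFS from vertex 0
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    hits = sum(not connected([e for e in edges if random.random() > p])
               for _ in range(samples))
    return hits / samples
```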

    Algorithmic and enumerative aspects of the Moser-Tardos distribution

    Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovász Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the intermediate and output distributions of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that under certain conditions, even when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show that in some cases this MT variant can run faster than the original MT algorithm itself, and we develop the first-known criterion of this kind for the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than previously known, by proving that the MT-distribution has "large" min-entropy and hence a large support size.
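    As a toy illustration of why the search for bad events dominates the cost of MT, the sketch below (assuming sum-threshold events, not the paper's data structures) avoids a full scan each round by re-checking only events that share a variable with the last resampled set; the paper's sub-linear searches exploit much stronger distributional properties of the intermediate configurations.

```python
import random
from collections import deque

def mt_with_event_queue(n, events, max_iters=10_000):
    """Sketch under assumed sum-threshold events (S, t), bad when
    sum(x[i] for i in S) > t.  After a resampling, only events sharing a
    variable with the resampled set can newly become bad, so a work
    queue replaces the naive full scan of all events every round."""
    x = [random.randint(0, 1) for _ in range(n)]
    touching = {i: [] for i in range(n)}          # variable -> event indices
    for j, (S, _) in enumerate(events):
        for i in S:
            touching[i].append(j)
    queue = deque(range(len(events)))             # initially check everything
    while queue and max_iters > 0:
        max_iters -= 1
        j = queue.popleft()
        S, t = events[j]
        if sum(x[i] for i in S) > t:              # event j is bad: resample S
            for i in S:
                x[i] = random.randint(0, 1)
                queue.extend(touching[i])         # only neighbors re-checked
    return x
```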

    Partial resampling to approximate covering integer programs

    We consider column-sparse covering integer programs, a generalization of set cover, which have a long line of research of (randomized) approximation algorithms. We develop a new rounding scheme based on the Partial Resampling variant of the Lovász Local Lemma developed by Harris & Srinivasan (2019). This achieves an approximation ratio of $1 + \frac{\ln(\Delta_1+1)}{a_{\min}} + O\Big( \log \Big(1 + \sqrt{ \frac{\log(\Delta_1+1)}{a_{\min}}} \Big) \Big)$, where $a_{\min}$ is the minimum covering constraint and $\Delta_1$ is the maximum $\ell_1$-norm of any column of the covering matrix (whose entries are scaled to lie in $[0,1]$). When there are additional constraints on the variable sizes, we show an approximation ratio of $\ln \Delta_0 + O(\log \log \Delta_0)$ (where $\Delta_0$ is the maximum number of non-zero entries in any column of the covering matrix). These results improve asymptotically, in several different ways, over results of Srinivasan (2006) and Kolliopoulos & Young (2005). We show nearly-matching inapproximability and integrality-gap lower bounds. We also show that the rounding process leads to negative correlation among the variables, which allows us to handle multi-criteria programs.
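    For readers unfamiliar with rounding covering programs, the textbook scale-and-round scheme below (standard randomized rounding with a repair pass, not this paper's partial-resampling scheme) shows the kind of procedure the improved ratios refer to; the scaling factor alpha is the knob that the analysis above tightens.

```python
import random

def round_covering_lp(x_frac, A, b, alpha):
    """Textbook scale-and-round for a covering program A x >= b with
    entries of A in [0,1] (a standard baseline, not this paper's
    partial-resampling scheme).  Each variable is set to 1 with
    probability min(1, alpha * x_frac[j]); a greedy pass then repairs
    any constraint left uncovered by the random draw."""
    n = len(x_frac)
    x = [1 if random.random() < min(1.0, alpha * xf) else 0 for xf in x_frac]
    for row, bound in zip(A, b):
        while sum(a * xj for a, xj in zip(row, x)) < bound:
            # raise the variable with the largest remaining contribution
            j = max(range(n), key=lambda j: row[j] * (1 - x[j]))
            if row[j] * (1 - x[j]) == 0:
                break                     # constraint cannot be repaired
            x[j] = 1
    return x
```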

    Dependent randomized rounding for clustering and partition systems with knapsack constraints

    Clustering problems are fundamental to unsupervised learning. There is an increased emphasis on fairness in machine learning and AI; one representative notion of fairness is that no single demographic group should be over-represented among the cluster-centers. This, and much more general clustering problems, can be formulated with "knapsack" and "partition" constraints. We develop new randomized algorithms targeting such problems, and study two in particular: multi-knapsack median and multi-knapsack center. Our rounding algorithms give new approximation and pseudo-approximation algorithms for these problems. One key technical tool, which may be of independent interest, is a new tail bound analogous to that of Feige (2006) for sums of random variables with unbounded variances; such bounds are very useful in inferring properties of large networks using few samples.
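    A minimal sketch of the basic dependent-rounding step (in the spirit of the two-coordinate mass-shifting common in this literature; the paper's scheme for partition systems is more involved): each step makes one coordinate integral while preserving every marginal and the total sum, which is what lets knapsack-type constraints survive the rounding.

```python
import random

def dependent_round(y):
    """Two-coordinate dependent rounding sketch: repeatedly pick two
    fractional coordinates and shift probability mass between them so
    that one becomes integral.  Each step preserves every marginal
    E[x_i] = y_i and keeps sum(y) unchanged."""
    y = list(y)
    frac = [i for i, v in enumerate(y) if 0 < v < 1]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        a = min(1 - y[i], y[j])           # feasible shift in one direction
        b = min(y[i], 1 - y[j])           # feasible shift in the other
        if random.random() < b / (a + b):
            y[i], y[j] = y[i] + a, y[j] - a
        else:
            y[i], y[j] = y[i] - b, y[j] + b
        frac = [i for i in frac if 0 < y[i] < 1]
    # at most one fractional coordinate remains; round it by its marginal
    return [1 if random.random() < v else 0 for v in y]
```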

    On Computing Maximal Independent Sets of Hypergraphs in Parallel

    Whether or not the problem of finding maximal independent sets (MIS) in hypergraphs is in (R)NC is one of the fundamental problems in the theory of parallel computing. Unlike the well-understood case of MIS in graphs, for the hypergraph problem our knowledge is quite limited despite considerable work. It is known that the problem is in RNC when the edges of the hypergraph have constant size. For general hypergraphs with $n$ vertices and $m$ edges, the fastest previously known algorithm works in time $O(\sqrt{n})$ with $\text{poly}(m,n)$ processors. In this paper we give an EREW PRAM algorithm that works in time $n^{o(1)}$ with $\text{poly}(m,n)$ processors on general hypergraphs satisfying $m \leq n^{\frac{\log^{(2)} n}{8 (\log^{(3)} n)^2}}$, where $\log^{(2)} n = \log\log n$ and $\log^{(3)} n = \log\log\log n$. Our algorithm is based on a sampling idea that reduces the dimension of the hypergraph, and it employs the algorithm for constant-dimension hypergraphs as a subroutine.
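    For reference, the sequential object the parallel algorithms must compute is easy to state; a greedy sketch (assuming hyperedges are given as vertex sets over 0..n-1) follows. The whole difficulty discussed above is achieving $n^{o(1)}$ parallel time rather than this inherently sequential scan.

```python
def greedy_hypergraph_mis(n, edges):
    """Sequential greedy baseline: I is independent if no hyperedge lies
    entirely inside I, and maximal if adding any vertex would trap some
    hyperedge inside I.  Hyperedges are assumed given as vertex sets
    over 0..n-1."""
    I = set()
    edge_sets = [set(e) for e in edges]
    for v in range(n):
        # add v only if no hyperedge becomes fully contained in I + {v}
        if not any(e <= I | {v} for e in edge_sets):
            I.add(v)
    return I
```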

    Need for national policy to recover endangered species

    India is bestowed with four of the world's mega-biodiversity hotspots; in fact, India is the only country blessed with so many of these biodiversity regions. However, this rich biodiversity is under severe threat owing to the increasing population as well as indiscriminate extraction from natural populations. Unplanned land use in the name of economic development has pushed a number of species into the threatened category. In the most recent update, the International Union for Conservation of Nature (IUCN, 2016) red-listed a total of 1052 species. Of these, 75 animals and 77 plants are on the critically endangered list, with many others in the endangered and vulnerable categories. What is even more worrying is the fact that a large number of species have been reduced to incredibly small numbers due to either habitat degradation or illegal hunting/harvesting. Unless immediate measures are taken, a number of these species could be on the red list within a matter of a few years. Unfortunately, as of now, except for a few attempts, there has been no concerted program in the country to address the restoration of threatened species.

    Approximation algorithms for stochastic clustering

    We consider stochastic settings for clustering, and develop provably-good approximation algorithms for a number of these notions. These algorithms yield better approximation ratios compared to the usual deterministic clustering setting. Additionally, they offer a number of advantages, including clustering which is fairer and has better long-term behavior for each user. In particular, they ensure that *every user* is guaranteed to get good service (on average). We also complement some of these results with impossibility results.
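    A toy numeric illustration of the per-user guarantee (a hypothetical model, not the paper's algorithm): with a lottery over candidate clusterings, every user can have small *expected* distance even when no single clustering serves everyone well.

```python
def expected_service(point_dists, lottery):
    """Toy model: lottery is a distribution over candidate clusterings and
    point_dists[s][u] is user u's distance to the nearest open center in
    clustering s.  The per-user guarantee is on the expectation over the
    lottery, not on any single clustering."""
    n_users = len(point_dists[0])
    return [sum(prob * point_dists[s][u] for s, prob in lottery)
            for u in range(n_users)]

# Two clusterings, each poor for one of the two users; a 50/50 lottery
# still gives both users the same moderate expected distance.
dists = [[1.0, 9.0],   # clustering 0: great for user 0, poor for user 1
         [9.0, 1.0]]   # clustering 1: the reverse
print(expected_service(dists, [(0, 0.5), (1, 0.5)]))   # -> [5.0, 5.0]
```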